
    Reasoning About Liquids via Closed-Loop Simulation

    Simulators are powerful tools for reasoning about a robot's interactions with its environment. However, when simulations diverge from reality, that reasoning becomes less useful. In this paper, we show how to close the loop between liquid simulation and real-time perception. We use observations of liquids to correct errors when tracking the liquid's state in a simulator. Our results show that closed-loop simulation is an effective way to prevent large divergence between the simulated and real liquid states. As a direct consequence, our method enables reasoning about liquids that would otherwise be infeasible due to large divergences, such as reasoning about occluded liquid. Comment: Robotics: Science & Systems (RSS), July 12-16, 2017. Cambridge, MA, US
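The correction loop the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: a scalar "liquid level" is advanced open-loop by a simulator whose dynamics are slightly wrong, then blended toward each observation; all names, dynamics, and the blend gain are hypothetical.

```python
def simulate_step(state, dt=0.1):
    """Toy open-loop dynamics: the liquid level drains at a fixed rate.
    Deliberately mismatched with the 'real' system below, so pure
    simulation drifts away from reality over time."""
    return state - 0.5 * dt

def closed_loop_step(sim_state, observation, gain=0.3, dt=0.1):
    """Advance the simulator, then correct it toward the observation.

    gain=0 is pure open-loop simulation; gain=1 trusts the sensor fully.
    """
    predicted = simulate_step(sim_state, dt)
    if observation is None:          # e.g. the liquid is occluded
        return predicted             # fall back to open-loop prediction
    return predicted + gain * (observation - predicted)

# The 'real' level drains slower than the simulator assumes; the
# correction keeps the tracked state near the true one anyway.
real, sim = 1.0, 1.0
for _ in range(20):
    real = max(0.0, real - 0.04)
    sim = closed_loop_step(sim, real)
```

Without the correction term the simulated level would drift by roughly 0.01 per step here; with it, the tracking error stays bounded by a small constant set by the gain.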

    Visual Closed-Loop Control for Pouring Liquids

    Pouring a specific amount of liquid is a challenging task. In this paper we develop methods for robots to use visual feedback to perform closed-loop control for pouring liquids. We propose both a model-based and a model-free method utilizing deep learning for estimating the volume of liquid in a container. Our results show that the model-free method is better able to estimate the volume. We combine this with a simple PID controller to pour specific amounts of liquid, and show that the robot is able to achieve an average deviation of 38ml from the target amount. To our knowledge, this is the first use of raw visual feedback to pour liquids in robotics. Comment: To appear at ICRA 201
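The control scheme the abstract pairs with the volume estimator can be sketched as a PID loop whose input is the estimated volume and whose output is a pour rate. In the paper the volume estimate comes from a deep network on camera images; in this hedged sketch it is read directly from a toy plant, and all gains and names are illustrative (a pure P gain suffices for this toy; the integral and derivative terms are shown for completeness).

```python
class PIDController:
    """Textbook PID on a scalar error signal."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

def pour(target_ml, steps=200, dt=0.05):
    """Simulate a pour: the controller output is the pour rate in ml/s."""
    pid = PIDController(kp=2.0, ki=0.0, kd=0.0, dt=dt)
    volume = 0.0
    for _ in range(steps):
        rate = max(0.0, pid.update(target_ml - volume))  # can't pour backwards
        volume += rate * dt
    return volume
```

With these gains the error shrinks geometrically each step, so the toy plant settles essentially exactly on the target; the paper's 38ml average deviation reflects real perception noise that this sketch omits.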

    Cosys-AirSim: A Real-Time Simulation Framework Expanded for Complex Industrial Applications

    Within academia and industry, there has been a need for expansive simulation frameworks that include model-based simulation of sensors, mobile vehicles, and the environment around them. To this end, the modular, real-time, open-source AirSim framework has been a popular community-built system that fulfills some of those needs. However, the framework required additional systems to serve complex industrial applications, including designing and testing new sensor modalities, Simultaneous Localization And Mapping (SLAM), autonomous navigation algorithms, and transfer learning with machine learning models. In this work, we discuss the modifications and additions to our open-source version of the AirSim simulation framework, including new sensor modalities, vehicle types, and methods to procedurally generate realistic environments with changeable objects. Furthermore, we show the various applications and use cases the framework can serve. Comment: Accepted at Annual Modeling and Simulation Conference, ANNSIM 202

    Lagrangian Neural Style Transfer for Fluids

    Artistically controlling the shape, motion and appearance of fluid simulations poses major challenges in visual effects production. In this paper, we present a neural style transfer approach from images to 3D fluids formulated in a Lagrangian viewpoint. Using particles for style transfer has unique benefits compared to grid-based techniques. Attributes are stored on the particles and hence are trivially transported by the particle motion. This intrinsically ensures temporal consistency of the optimized stylized structure and notably improves the resulting quality. Simultaneously, the expensive, recursive alignment of stylization velocity fields required by grid approaches is unnecessary, reducing the computation time to less than an hour and rendering neural flow stylization practical in production settings. Moreover, the Lagrangian representation improves artistic control as it allows for multi-fluid stylization and consistent color transfer from images, and the generality of the method enables stylization of smoke and liquids alike. Comment: ACM Transactions on Graphics (SIGGRAPH 2020), additional materials: http://www.byungsoo.me/project/lnst/index.htm
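The Lagrangian advantage the abstract describes can be shown in a minimal sketch (hypothetical names, not the paper's code): a stylization attribute stored per particle is transported for free by particle motion, so no grid-based re-alignment between frames is needed to keep it temporally consistent.

```python
class Particle:
    def __init__(self, x, y, style):
        self.x, self.y = x, y
        self.style = style            # e.g. an optimized color or feature

def advect(particles, velocity, dt):
    """Move particles through a velocity field; attributes ride along."""
    for p in particles:
        vx, vy = velocity(p.x, p.y)
        p.x += vx * dt
        p.y += vy * dt
        # p.style is untouched: temporal consistency is intrinsic.
    return particles

# A simple rotational velocity field standing in for a fluid solve.
def swirl(x, y):
    return -y, x

ps = [Particle(1.0, 0.0, style="warm"), Particle(0.0, 1.0, style="cool")]
advect(ps, swirl, dt=0.1)
```

A grid-based method would instead have to resample or re-align the stylized field after every advection step, which is the recursive cost the Lagrangian formulation avoids.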

    Oligodendrocytes: biology and pathology

    Oligodendrocytes are the myelinating cells of the central nervous system (CNS). They are the end product of a cell lineage which has to undergo a complex and precisely timed program of proliferation, migration, differentiation, and myelination to finally produce the insulating sheath of axons. Due to this complex differentiation program, and due to their unique metabolism/physiology, oligodendrocytes count among the most vulnerable cells of the CNS. In this review, we first describe the different steps eventually culminating in the formation of mature oligodendrocytes and myelin sheaths, as revealed by studies in rodents. We will then show the differences and similarities of human oligodendrocyte development. Finally, we will lay out the different pathways leading to oligodendrocyte and myelin loss in human CNS diseases, and we will reveal the different principles leading to the restoration of myelin sheaths or to a failure to do so.

    Liquids & Robots: An Investigation of Techniques for Robotic Interaction with Liquids

    Thesis (Ph.D.)--University of Washington, 2018. Liquids are an important part of everyday human environments. We use them for common tasks such as pouring coffee, mixing ingredients for a recipe, or washing hands. For a robot to operate effectively on such tasks, it must be able to robustly handle liquids. In this thesis, we investigate ways in which robots can overcome some of the challenges inherent in interacting with liquids: how robots can perceive, reason about, and manipulate them. We split this research into two parts. The first part investigates how learning-based methods can be used to solve tasks involving liquids. The second part focuses on how model-based methods may be used, and how learning- and model-based methods may be combined.

    In the first part of this thesis we investigate how deep learning can be adapted to tasks involving liquids. We develop several deep network architectures for the task of detection, a liquid perception task wherein the robot must label pixels in its color camera images as liquid or not-liquid. Our results show that networks able to integrate temporal information outperform those that cannot, indicating that temporal integration may be necessary for perceiving translucent liquids. Additionally, we apply our network architectures to the related task of tracking, a liquid reasoning task where the robot must identify the pixel locations of all liquid, seen and unseen, in an image based on its learned knowledge of liquid physics. Our results show that the best-performing network was one with an explicit memory, suggesting that liquid reasoning tasks may be easier to solve when passing explicit state information forward in time. Finally, we apply our deep learning architectures to the task of pouring specific amounts of liquid, a manipulation task requiring precise control. The results show that by using our deep neural networks, the robot was able to pour specific amounts of liquid using only RGB feedback.

    In the second part of this thesis we investigate model-based methods for robotic interaction with liquids. Specifically, we focus on physics-based models that incorporate fluid dynamics algorithms. We show how a robot can use a liquid simulator to track the 3D state of liquid over time. By using a strong model, the robot is able to reason in two entirely different contexts using the exact same algorithm: in one case, about the amount of water in a container during a pour action; in the other, about a blockage in an opaque pipe. We extend our strong, physics-based liquid model by creating SPNets, an implementation of fluid dynamics with deep learning tools, allowing it to be seamlessly integrated with deep networks and enabling fully differentiable fluid dynamics. Our results show that the gradients produced by this model can be used to discover fluid parameters (e.g., viscosity, cohesion) from data, precisely control liquids to move them to desired poses, and train policies directly from the model. We also show how this can be integrated with deep networks to perceive and track the 3D liquid state.

    To summarize, this thesis investigates both learning-based and model-based approaches to robotic interaction with liquids. Our results with deep learning, a learning-based approach, show that deep neural networks are proficient at learning to perceive liquids from raw sensory data and at learning basic physical properties of liquids. Our results with liquid simulation, a model-based approach, show that physics-based models are very good at generalizing to a wide variety of tasks. Finally, our results from combining the two show how the generalizability of models may be joined with the adaptability of deep learning to enable the application of several robotics methodologies.
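The differentiable-simulation idea behind SPNets can be illustrated with a one-parameter toy. This is not SPNets itself (which implements position-based fluids; all names here are hypothetical): a particle's velocity decays with a viscosity-like coefficient k, the derivative of the state with respect to k is propagated in forward mode alongside the state, and gradient descent then recovers k from an observed outcome, mirroring the "discover fluid parameters from data" use of the model's gradients.

```python
def simulate(k, steps=50, dt=0.1, v0=1.0):
    """Toy one-parameter 'fluid': a particle whose velocity decays at a
    viscosity-like rate k. Alongside the state we carry d(state)/dk in
    forward mode, making the whole simulator differentiable in k."""
    x, v = 0.0, v0
    dx, dv = 0.0, 0.0                      # d x / d k,  d v / d k
    for _ in range(steps):
        dv = dv * (1.0 - k * dt) - v * dt  # chain rule through v *= (1 - k*dt)
        v *= (1.0 - k * dt)
        dx += dv * dt
        x += v * dt
    return x, dx

def fit_viscosity(observed_x, k0=0.1, lr=0.01, iters=500):
    """Recover k by gradient descent on the loss (x(k) - observed_x)**2."""
    k = k0
    for _ in range(iters):
        x, dx = simulate(k)
        k -= lr * 2.0 * (x - observed_x) * dx
    return k

true_k = 0.8
observed_x, _ = simulate(true_k)   # 'data' generated with the true parameter
```

The same pattern scales up in an autodiff framework: when every simulation step is differentiable, parameter discovery, liquid control, and policy training all become gradient descent through the simulator.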

    Intelligence tests for robots: Solving perceptual reasoning tasks with a humanoid robot

    Intelligence test scores have long been shown to correlate with a wide variety of other abilities. The goal of this thesis is to enable a robot to solve some of the common tasks from intelligence tests with the intent of improving its performance on other real-world tasks. In other words, the goal of this thesis is to make robots more intelligent. We used an upper-torso humanoid robot to solve three common perceptual reasoning tasks: the object pairing task, the order completion task, and the matrix completion task. Each task consisted of a set of objects arranged in a specific configuration. The robot's job was to select the correct solution from a set of candidate solutions. To find the solution, the robot first performed a set of stereotyped exploratory behaviors on each object, while recording from its auditory, proprioceptive, and visual sensory modalities. It used this information to compute a set of similarity scores between every pair of objects. Given these similarity scores, the robot was able to deduce patterns in the arrangement of the objects, which enabled it to solve the given task. The robot repeated this process for all the tasks that we presented to it. We found that the robot was able to solve all the different types of tasks with a high degree of accuracy. There have been previous computational solutions to tasks from intelligence tests, but no solutions thus far have used a robot. This thesis is the first work to attempt to solve tasks from intelligence tests using an embodied approach. We identified a framework for solving perceptual reasoning tasks, and we showed that it can be successfully used to solve a variety of such tasks. Due to the strong correlation between intelligence test scores and performance in real-world environments, this suggests that an embodied approach to learning can be very useful for solving a wide variety of tasks from real-world environments in addition to tasks from intelligence tests.
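The similarity-score framework described above can be sketched for the object pairing task. This is an illustrative reduction, not the thesis's code: each object is summarized by a feature vector (standing in for features pooled over the auditory, proprioceptive, and visual recordings), pairwise similarities are computed, and the most similar objects are matched greedily. All names and feature values are hypothetical.

```python
import math

def similarity(a, b):
    """Cosine similarity between two multimodal feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def solve_pairing(objects):
    """Object pairing task: greedily match the most similar pairs."""
    remaining = list(range(len(objects)))
    pairs = []
    while remaining:
        i = remaining.pop(0)
        j = max(remaining, key=lambda j: similarity(objects[i], objects[j]))
        remaining.remove(j)
        pairs.append((i, j))
    return pairs

# Four objects forming two natural pairs (hypothetical features).
objects = [
    [1.0, 0.1, 0.0],   # object 0, similar to object 2
    [0.0, 1.0, 0.9],   # object 1, similar to object 3
    [0.9, 0.2, 0.1],   # object 2
    [0.1, 0.9, 1.0],   # object 3
]
```

The order and matrix completion tasks reuse the same similarity matrix with different pattern-deduction rules on top, which is what makes a single framework cover all three tasks.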